Sparse Kernel Feature Analysis

Authors

  • Alex J. Smola
  • Olvi L. Mangasarian
Abstract

Kernel Principal Component Analysis (KPCA) has proven to be a versatile tool for unsupervised learning, albeit at a high computational cost due to the dense expansions in terms of kernel functions. We overcome this problem by proposing a new class of feature extractors employing ℓ1 norms in coefficient space instead of the reproducing kernel Hilbert space in which KPCA was originally formulated. Moreover, the modified setting allows us to efficiently extract features maximizing criteria other than the variance, much in a projection pursuit fashion.
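To see why the dense expansion is a problem, the following minimal sketch (not the paper's algorithm; the Gaussian kernel, the `gamma` value, and all variable names are illustrative assumptions) computes the top kernel principal component of a small data set and counts the non-zero expansion coefficients: essentially every training point contributes, which is exactly the density the proposed ℓ1 penalty in coefficient space is meant to remove.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 2))          # 50 training points in 2-D

def rbf(A, B, gamma=0.5):
    # Gaussian (RBF) kernel matrix between rows of A and rows of B
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

K = rbf(X, X)
n = len(X)
H = np.eye(n) - np.ones((n, n)) / n   # centring matrix
Kc = H @ K @ H                        # centre the kernel in feature space

eigval, eigvec = np.linalg.eigh(Kc)
alpha = eigvec[:, -1] / np.sqrt(eigval[-1])   # top-component coefficients

# Dense expansion: projecting a new point requires a kernel evaluation
# against every training point whose coefficient is non-zero.
dense_nonzero = int(np.sum(np.abs(alpha) > 1e-6))
print(dense_nonzero)
```

With a generic data set, `dense_nonzero` comes out at (or near) the full training-set size, so every feature evaluation costs O(n) kernel calls; a sparse coefficient vector would reduce this to the number of surviving terms.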


Related articles

Gene Identification from Microarray Data for Diagnosis of Acute Myeloid and Lymphoblastic Leukemia Using a Sparse Gene Selection Method

Background: Microarray experiments can simultaneously determine the expression of thousands of genes. Identification of potential genes from microarray data for diagnosis of cancer is important. This study aimed to identify genes for the diagnosis of acute myeloid and lymphoblastic leukemia using a sparse feature selection method. Materials and Methods: In this descriptive study, the expressio...


Sparse Kernel Principal Component Analysis

'Kernel' principal component analysis (PCA) is an elegant nonlinear generalisation of the popular linear data analysis method, where a kernel function implicitly defines a nonlinear transformation into a feature space wherein standard PCA is performed. Unfortunately, the technique is not 'sparse', since the components thus obtained are expressed in terms of kernels associated with every trainin...


Sparse Kernel Orthonormalized PLS for feature extraction in large data sets

We propose a kernel extension of Orthonormalized PLS for feature extraction, within the framework of Kernel Multivariate Analysis (KMVA). KMVA methods have dense solutions and, therefore, scale badly for large datasets. By imposing sparsity, we propose a modified KOPLS algorithm with reduced complexity (rKOPLS). The resulting scheme is a powerful feature extractor for regression and classification...


Sparse support vector machines by kernel discriminant analysis

We discuss sparse support vector machines (SVMs) by selecting the linearly independent data in the empirical feature space. First we select training data that maximally separate two classes in the empirical feature space. As a selection criterion we use linear discriminant analysis in the empirical feature space and select training data by forward selection. Then the SVM is trained in the empir...


Sparse Kernel Orthonormalized PLS for feature extraction in large data sets

In this paper we are presenting a novel multivariate analysis method for large scale problems. Our scheme is based on a novel kernel orthonormalized partial least squares (PLS) variant for feature extraction, imposing sparsity constrains in the solution to improve scalability. The algorithm is tested on a benchmark of UCI data sets, and on the analysis of integrated short-time music features fo...




Publication date: 1999